10 research outputs found

    Time-based self-supervised learning for Wireless Capsule Endoscopy

    State-of-the-art machine learning models, and especially deep learning ones, are significantly data-hungry; they require vast amounts of manually labeled samples to function correctly. However, in most medical imaging fields, obtaining such data can be challenging. Not only is the volume of data a problem, but so are the imbalances within its classes; it is common to have many more images of healthy patients than of those with pathology. Computer-aided diagnostic systems suffer from these issues, usually over-designing their models to perform accurately. This work proposes using self-supervised learning for wireless endoscopy videos by introducing a custom-tailored method that does not initially need labels or an appropriately balanced dataset. We show that using the inherent structure learned by our method, extracted from the temporal axis, improves the detection rate on several domain-specific applications even under severe imbalance. State-of-the-art results are achieved in polyp detection, with 95.00 ± 2.09% Area Under the Curve and 92.77 ± 1.20% accuracy on the CAD-CAP dataset.
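The abstract does not detail the pretext task, but a common time-based self-supervised setup derives free labels from frame order. The sketch below is a hypothetical illustration (the function name and pairing scheme are assumptions, not the authors' method): pairs of frames are labeled 1 if presented in their original temporal order and 0 if swapped, so no manual annotation is needed.

```python
import random

def make_temporal_pairs(frames, n_pairs, rng=None):
    """Build a self-supervised dataset from an unlabeled frame sequence.

    Each sample is a pair of frames with a binary label:
    1 if the pair keeps the original temporal order, 0 if swapped.
    The label comes from time itself, not from human annotation.
    """
    rng = rng or random.Random(0)
    pairs = []
    for _ in range(n_pairs):
        i, j = sorted(rng.sample(range(len(frames)), 2))  # i < j
        if rng.random() < 0.5:
            pairs.append(((frames[i], frames[j]), 1))  # correct order
        else:
            pairs.append(((frames[j], frames[i]), 0))  # swapped order
    return pairs

frames = [f"frame_{k}" for k in range(100)]
dataset = make_temporal_pairs(frames, n_pairs=8)
```

A network trained to predict these order labels must encode temporal structure, and its features can then be reused for downstream tasks such as polyp detection.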

    Generic Feature Learning for Wireless Capsule Endoscopy Analysis

    The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer-aided decision (CAD) systems to help physicians with video screening and, finally, with the diagnosis. Most CAD systems used in capsule endoscopy share a common system design, but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch. This makes the design of new CAD systems very time consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, it reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase).

    Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation: The M&Ms Challenge

    The emergence of deep learning has considerably advanced the state-of-the-art in cardiac magnetic resonance (CMR) segmentation. Many techniques have been proposed over the last few years, bringing the accuracy of automated segmentation close to human performance. However, these models have all too often been trained and validated using cardiac imaging samples from single clinical centres or homogeneous imaging protocols. This has prevented the development and validation of models that are generalizable across different clinical centres, imaging conditions or scanner vendors. To promote further research and scientific benchmarking in the field of generalizable deep learning for cardiac segmentation, this paper presents the results of the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) Challenge, which was recently organized as part of the MICCAI 2020 Conference. A total of 14 teams submitted different solutions to the problem, combining various baseline models, data augmentation strategies, and domain adaptation techniques. The obtained results indicate the importance of intensity-driven data augmentation, as well as the need for further research to improve generalizability towards unseen scanner vendors or new imaging protocols. Furthermore, we present a new resource of 375 heterogeneous CMR datasets acquired using four different scanner vendors in six hospitals and three different countries (Spain, Canada and Germany), which we provide as open access for the community to enable future research in the field.

    Sequential Models for Endoluminal Image Classification

    Wireless Capsule Endoscopy (WCE) is a procedure to examine the human digestive system for potential mucosal polyps, tumours, or bleedings using an encapsulated camera. This work focuses on polyp detection within WCE videos through Machine Learning. When using Machine Learning in the medical field, scarce and unbalanced datasets often make it hard to achieve satisfactory performance. We claim that using Sequential Models to take the temporal nature of the data into account improves on previous approaches. Thus, we present a bidirectional Long Short-Term Memory Network (BLSTM), a sequential network that is particularly designed for temporal data. We find that the BLSTM network outperforms non-sequential architectures and other previous models, achieving a final Area under the Curve of 93.83%. Experiments show that our method of extracting spatial and temporal features yields better performance and could be a possible method to decrease the time needed by physicians to analyse the video material.
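The core idea of a bidirectional recurrent network is to combine, at each time step, a hidden state computed from the past with one computed from the future. A minimal sketch, using a plain tanh cell instead of full LSTM gates (the cell, weights, and function names here are simplifying assumptions, not the paper's architecture):

```python
import math

def simple_rnn_pass(seq, w_in=0.5, w_rec=0.3):
    """One directional pass of a heavily simplified recurrent cell:
    h_t = tanh(w_in * x_t + w_rec * h_{t-1}), with h_0 = 0."""
    h, states = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        states.append(h)
    return states

def bidirectional_features(seq):
    """Concatenate forward and backward hidden states per time step,
    as a BLSTM does: each frame's feature sees both past and future."""
    fwd = simple_rnn_pass(seq)
    bwd = simple_rnn_pass(seq[::-1])[::-1]  # run backwards, re-align
    return list(zip(fwd, bwd))

feats = bidirectional_features([0.1, 0.9, 0.4, 0.7])  # 4 (fwd, bwd) pairs
```

In a real BLSTM the tanh cell is replaced by gated LSTM units and the weights are learned, but the forward/backward concatenation pattern is the same.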

    Artificial intelligence for the detection of polyps or cancer with colon capsule endoscopy

    Colorectal cancer is common and can be devastating, with long-term survival rates vastly improved by early diagnosis. Colon capsule endoscopy (CCE) is increasingly recognised as a reliable option for colonic surveillance, but widespread adoption has been slow for several reasons, including the time-consuming reading process of the CCE recording. Automated image recognition and artificial intelligence (AI) are appealing solutions in CCE. Through a review of the currently available and developmental technologies, we discuss how AI is poised to deliver at the forefront of CCE in the coming years. Current practice for CCE reporting often involves a two-step approach, with a 'pre-reader' and 'validator'. This requires skilled and experienced readers with a significant time commitment. Therefore, CCE is well-positioned to reap the benefits of ongoing digital innovation. This is likely to initially involve an automated AI check of finished CCE evaluations as a quality control measure. Once felt reliable, AI could be used in conjunction with a 'pre-reader', before adopting more of this role by sending provisional results and abnormal frames to the validator. With time, AI would be able to evaluate the findings more thoroughly and reduce the input required from human readers, and ultimately autogenerate a highly accurate report and recommendation of therapy, if required, for any pathology identified. As with many medical fields reliant on image recognition, AI will be a welcome aid in CCE. Initially, this will be as an adjunct to 'double-check' that nothing has been missed, but with time will hopefully lead to a faster, more convenient diagnostic service for the screening population.

    WCE polyp detection with triplet based embeddings

    Wireless capsule endoscopy is a medical procedure used to visualize the entire gastrointestinal tract and to diagnose intestinal conditions, such as polyps or bleeding. Current analyses are performed by manually inspecting nearly every frame of the video, a tedious and error-prone task. Automatic image analysis methods can be used to reduce the time needed for physicians to evaluate a capsule endoscopy video; however, these methods are still in a research phase. In this paper we focus on computer-aided polyp detection in capsule endoscopy images. This is a challenging problem because of the diversity of polyp appearance, the imbalanced dataset structure and the scarcity of data. We have developed a new polyp computer-aided decision system that combines a deep convolutional neural network and metric learning. The key point of the method is the use of the Triplet Loss function, with the aim of improving feature extraction from the images when only a small dataset is available. The Triplet Loss function makes it possible to train robust detectors by forcing images from the same category to be represented by similar embedding vectors, while ensuring that images from different categories are represented by dissimilar vectors. Empirical results show a meaningful increase in AUC values compared to state-of-the-art methods. Good performance is not the only requirement when considering the adoption of this technology into clinical practice: trust and explainability of decisions are as important as performance. With this purpose, we also provide a method to generate visual explanations of the outcome of our polyp detector. These explanations can be used to build a physician's trust in the system and also to convey information about the inner workings of the method to the designer for debugging purposes.
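The triplet loss described above has a standard closed form: it penalizes an anchor embedding that is farther from a same-class (positive) embedding than from a different-class (negative) embedding, up to a margin. A minimal sketch (the margin value 0.2 and the use of squared Euclidean distance are common conventions, not details taken from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors:
    L = max(0, ||a - p||^2 - ||a - n||^2 + margin).
    Zero when the negative is already pushed `margin` farther
    away than the positive; positive otherwise."""
    d_pos = np.sum((anchor - positive) ** 2)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)  # anchor-negative distance
    return float(max(0.0, d_pos - d_neg + margin))

a = np.array([0.0, 0.0])          # anchor embedding
p = np.array([0.1, 0.0])          # same-class image, nearby
n_far = np.array([1.0, 0.0])      # different class, far away -> loss 0
n_close = np.array([0.05, 0.0])   # different class, too close -> loss > 0
```

Minimizing this loss over many (anchor, positive, negative) triplets shapes the embedding space so that class membership is reflected in distance, which is what makes the detector robust on small datasets.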

    Mathematical Abilities in School-Aged Children: A Structural Magnetic Resonance Imaging Analysis With Radiomics

    Structural magnetic resonance imaging (sMRI) studies have shown that children who differ in some mathematical abilities show differences in gray matter volume, mainly in parietal and frontal regions that are involved in number processing, attentional control, and memory. In the present study, a structural neuroimaging analysis based on radiomics and machine learning models is presented with the aim of identifying the brain areas that best predict children's performance in a variety of mathematical tests. A sample of 77 school-aged children from third to sixth grade were administered four mathematical tests: Math fluency, Calculation, Applied problems and Quantitative concepts, as well as a structural brain imaging scan. By extracting radiomics related to the shape, intensity, and texture of specific brain areas, we observed that areas from the frontal, parietal, temporal, and occipital lobes, basal ganglia, and limbic system were differentially related to children's performance in the mathematical tests. sMRI-based analyses in the context of mathematical performance have mainly focused on volumetric measures. However, the results of the radiomics-based analysis showed that for these areas, texture features were the most important for the regression models, while volume accounted for less than 15% of the shape importance. These findings highlight the potential of radiomics for more in-depth analysis of medical images for the identification of brain areas related to mathematical abilities.

    Artificial intelligence to improve polyp detection and screening time in colon capsule endoscopy

    Colon Capsule Endoscopy (CCE) is a minimally invasive procedure which is increasingly being used as an alternative to conventional colonoscopy. Videos recorded by the capsule cameras are long and require one or more experts' time to review and identify polyps or other potential intestinal problems that can lead to major health issues. We developed and tested a multi-platform web application, AI-Tool, which embeds a Convolutional Neural Network (CNN) to help CCE reviewers. With the help of artificial intelligence, AI-Tool is able to detect images with a high probability of containing a polyp and prioritize them during the reviewing process. With the collaboration of 3 experts who reviewed 18 videos, we compared the classical linear review method using RAPID Reader Software v9.0 with the new software we present. Applying the new strategy, reviewing time was reduced by a factor of 6 and polyp detection sensitivity increased from 81.08% to 87.80%.
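The prioritization strategy can be sketched in a few lines: frames whose CNN score clears a threshold are shown first, in descending order of confidence, before the remaining frames. This is a hypothetical illustration of the idea only; the function name, threshold, and ordering details are assumptions, not AI-Tool's actual implementation.

```python
def prioritize_frames(frame_scores, threshold=0.5):
    """Reorder frame indices so the reviewer sees likely polyp
    candidates first.  `frame_scores` is one CNN polyp-probability
    per frame; flagged frames come first, highest score first,
    followed by the remaining frames in their original order."""
    flagged = [(i, s) for i, s in enumerate(frame_scores) if s >= threshold]
    rest = [i for i, s in enumerate(frame_scores) if s < threshold]
    flagged.sort(key=lambda pair: pair[1], reverse=True)
    return [i for i, _ in flagged] + rest

order = prioritize_frames([0.1, 0.9, 0.6, 0.2])  # frames 1 and 2 first
```

Front-loading the high-probability frames is what allows a reviewer to confirm or discard most findings early, which is consistent with the reported 6x reduction in reading time.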

    Cardiac aging synthesis from cross-sectional data with conditional generative adversarial networks

    Age has important implications for health, and understanding how age manifests in the human body is the first step toward potential intervention. This becomes especially important for cardiac health, since age is the main risk factor for the development of cardiovascular disease. Data-driven modeling of age progression has been conducted successfully in diverse applications such as face or brain aging. While longitudinal data is the preferred option for training deep learning models, collecting such a dataset is usually very costly, especially in medical imaging. In this work, a conditional generative adversarial network is proposed to synthesize older and younger versions of a heart scan using only cross-sectional data. We train our model with more than 14,000 different scans from the UK Biobank. The induced modifications focused mainly on the interventricular septum and the aorta, which is consistent with the existing literature on cardiac aging. We evaluate the results by measuring image quality and the mean absolute error of predicted age using a pre-trained regressor, and demonstrate the application of synthetic data for counter-balancing biased datasets. The results suggest that the proposed approach is able to model realistic changes in the heart using only cross-sectional data, and that these data can be used to correct age bias in a dataset.

    Key research questions for implementation of artificial intelligence in capsule endoscopy

    Background: Artificial intelligence (AI) is rapidly infiltrating multiple areas in medicine, with gastrointestinal endoscopy paving the way in both research and clinical applications. Multiple challenges associated with the incorporation of AI in endoscopy are being addressed in recent consensus documents. Objectives: In the current paper, we aimed to map future challenges and areas of research for the incorporation of AI in capsule endoscopy (CE) practice. Design: Modified three-round Delphi consensus online survey. Methods: The study design was based on a modified three-round Delphi consensus online survey distributed to a group of CE and AI experts. Round one aimed to map out key research statements and challenges for the implementation of AI in CE. All queries addressing the same questions were merged into a single issue. The second round aimed to rank all questions generated during round one and to identify the top-ranked statements with the highest total score. Finally, the third round aimed to redistribute and rescore the top-ranked statements. Results: Twenty-one experts (16 gastroenterologists and 5 data scientists) participated in the survey. In the first round, 48 statements divided into seven themes were generated. After scoring all statements and rescoring the top 12, the question of AI use for identification and grading of small bowel pathologies was scored highest (mean score 9.15), correlation of AI and human expert reading second (9.05), and real-life feasibility third (9.0). Conclusion: In summary, our current study lays out a roadmap of future challenges and research areas on the way to fully incorporating AI in CE reading.